Derivation of the probability density evolution provides invaluable insight into the behavior of many stochastic systems and their performance. However, for most real-time applications, numerical determination of the probability density evolution is a formidable task. The latter is due to the required temporal and spatial discretization schemes that render most computational solutions inefficient and impractical. In this regard, the development of an efficient computational surrogate model is of paramount importance. Recent studies on physics-constrained networks show that a suitable surrogate can be achieved by encoding physical insight into deep neural networks. To this end, the present work introduces DeepPDEM, which utilizes the concept of physics-informed networks to solve the evolution of the probability density via a proposed deep learning method. DeepPDEM learns the General Density Evolution Equation (GDEE) of stochastic structures. This approach paves the way for a mesh-free learning method that can solve the density evolution problem using prior simulation data. Moreover, it can also serve as an efficient surrogate for the solution at any other spatio-temporal point within optimization schemes or real-time applications. To demonstrate the potential applicability of the proposed framework, two network architectures with different activation functions as well as two optimizers are investigated. Numerical implementation on three different problems verifies the accuracy and efficacy of the proposed method.
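The GDEE that DeepPDEM learns transports probability density along the response trajectories of the stochastic system. A minimal numerical sketch (assuming a one-dimensional response with a constant velocity, which is an illustrative simplification and not the paper's setup) shows the residual that a physics-informed network would drive toward zero at its collocation points:

```python
import numpy as np

# Illustrative sketch: the generalized density evolution equation (GDEE)
#   dp/dt + zdot * dp/dz = 0
# says the density is transported along response trajectories. For a constant
# velocity zdot = V, any profile f(z - V*t) solves it exactly; a
# physics-informed surrogate is trained so this residual vanishes.

V = 0.7  # assumed constant response velocity zdot(theta, t) = V

def density(z, t):
    """Exact transported Gaussian profile, p(z, t) = f(z - V t)."""
    u = z - V * t
    return np.exp(-u**2) / np.sqrt(np.pi)

def gdee_residual(z, t, h=1e-5):
    """Central-difference estimate of dp/dt + V * dp/dz."""
    dp_dt = (density(z, t + h) - density(z, t - h)) / (2 * h)
    dp_dz = (density(z + h, t) - density(z - h, t)) / (2 * h)
    return dp_dt + V * dp_dz

zs = np.linspace(-2.0, 2.0, 9)
res = np.array([gdee_residual(z, 0.3) for z in zs])
print(np.max(np.abs(res)))  # near zero: the profile satisfies the GDEE
```

In an actual physics-informed network the finite differences are replaced by automatic differentiation of the network output, and the squared residual enters the training loss.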
Graph Neural Networks (GNNs) are a family of graph networks inspired by mechanisms existing between nodes on a graph. In recent years there has been an increased interest in GNNs and their derivatives, i.e., Graph Attention Networks (GAT), Graph Convolutional Networks (GCN), and Graph Recurrent Networks (GRN). An increase in their usability in computer vision is also observed. The number of GNN applications in this field continues to expand; it includes video analysis and understanding, action and behavior recognition, computational photography, image and video synthesis from zero or few shots, and many more. This contribution aims to collect papers published about GNN-based approaches towards computer vision. They are described and summarized from three perspectives. Firstly, we investigate the architectures of Graph Neural Networks and their derivatives used in this area to provide accurate and explainable recommendations for the ensuing investigations. Secondly, we present the datasets used in these works. Finally, using graph analysis, we examine relations between GNN-based studies in computer vision and potential sources of inspiration identified outside of this field.
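The node-to-node mechanism shared by the GNN families named above can be sketched with one step of GCN-style propagation on a toy graph (illustrative values only, not drawn from any surveyed work):

```python
import numpy as np

# Illustrative sketch of message passing: each node updates its feature by
# aggregating its neighbors', here with the symmetric normalization
# D^{-1/2} (A + I) D^{-1/2} used by GCN layers (before weights/nonlinearity).

A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)   # 3-node path graph
X = np.array([[1.0], [2.0], [3.0]])      # one scalar feature per node

A_hat = A + np.eye(3)                    # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
propagated = D_inv_sqrt @ A_hat @ D_inv_sqrt @ X  # one propagation step

print(propagated.ravel())  # each node's feature now mixes its neighborhood
```

A full GCN layer would follow this propagation with a learned weight matrix and a nonlinearity; GAT replaces the fixed normalization with learned attention coefficients.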
In today's uncertain and competitive market, where enterprises are subjected to increasingly shortened product life-cycles and frequent volume changes, reconfigurable manufacturing systems (RMS) applications play a significant role in the manufacturing industry's success. Despite the advantages offered by RMS, achieving a high degree of efficiency constitutes a challenging task for stakeholders and decision-makers when they face the trade-off decisions inherent in these complex systems. This study addresses work task and resource allocations to workstations together with buffer capacity allocation in RMS. The aim is to simultaneously maximize throughput and minimize total buffer capacity under fluctuating production volumes and capacity changes while considering the stochastic behavior of the system. An enhanced simulation-based multi-objective optimization (SMO) approach with customized simulation and optimization components is proposed to address the abovementioned challenges. Apart from presenting the optimal solutions subject to volume and capacity changes, the proposed approach supports decision-makers with discovered knowledge to further understand the RMS design. In particular, this study presents a problem-specific customized SMO combined with a novel flexible pattern mining method for optimizing RMS and conducting post-optimal analyses. To this end, this study demonstrates the benefits of applying SMO and knowledge discovery methods for fast decision-support and production planning of RMS.
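The throughput-versus-buffer trade-off at the heart of this formulation can be sketched with a toy Pareto filter; the configurations and their values below are illustrative assumptions, not outputs of the study's simulation model:

```python
# Illustrative sketch of multi-objective selection: keep only configurations
# that are Pareto-optimal when maximizing throughput and minimizing total
# buffer capacity. Names and numbers are hypothetical.

def pareto_front(configs):
    """Return the configs not dominated by any other.

    Config b dominates a if b has throughput >= a's and buffer <= a's,
    with at least one strict inequality.
    """
    front = []
    for a in configs:
        dominated = any(
            b["throughput"] >= a["throughput"]
            and b["buffer"] <= a["buffer"]
            and (b["throughput"] > a["throughput"] or b["buffer"] < a["buffer"])
            for b in configs
        )
        if not dominated:
            front.append(a)
    return front

configs = [
    {"name": "A", "throughput": 92.0, "buffer": 40},
    {"name": "B", "throughput": 95.0, "buffer": 55},
    {"name": "C", "throughput": 90.0, "buffer": 60},  # dominated by A
    {"name": "D", "throughput": 97.0, "buffer": 80},
]
print([c["name"] for c in pareto_front(configs)])  # → ['A', 'B', 'D']
```

The SMO approach in the study additionally accounts for stochastic behavior, so each objective value would itself come from repeated simulation runs rather than a single deterministic number.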
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
In this paper, we propose a sample complexity bound for learning a simplex from noisy samples. A dataset of size $n$ is given, consisting of i.i.d. samples drawn from a uniform distribution over an unknown arbitrary simplex in $\mathbb{R}^K$, where the samples are assumed to be corrupted by additive Gaussian noise of arbitrary magnitude. We propose a strategy that, with high probability, outputs a simplex whose total variation distance from the true simplex is $\epsilon + O\left(\mathrm{SNR}^{-1}\right)$, for any $\epsilon > 0$. We prove that to get this close to the true simplex, it suffices to have $n \ge \tilde{O}\left(K^2/\epsilon^2\right)$ samples. Here, SNR stands for the signal-to-noise ratio, which can be viewed as the ratio of the diameter of the simplex to the standard deviation of the noise. Our proof is based on recent advances in sample compression techniques, which have already shown promise in deriving tight bounds for density estimation in high-dimensional Gaussian mixture models.
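The data model described above can be sketched as follows; the simplex vertices, dimension, sample count, and noise level are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Illustrative sketch of the data model: n i.i.d. points drawn uniformly from
# a simplex in R^K (the convex hull of K+1 vertices), then corrupted by
# additive Gaussian noise with standard deviation sigma.

rng = np.random.default_rng(0)
K, n, sigma = 3, 1000, 0.1

vertices = rng.normal(size=(K + 1, K))           # arbitrary simplex vertices
weights = rng.dirichlet(np.ones(K + 1), size=n)  # uniform barycentric weights
clean = weights @ vertices                       # uniform samples on the simplex
noisy = clean + sigma * rng.normal(size=(n, K))  # additive Gaussian corruption

# SNR as described in the abstract: simplex diameter over the noise std.
diffs = vertices[:, None, :] - vertices[None, :, :]
diameter = np.max(np.linalg.norm(diffs, axis=-1))
snr = diameter / sigma
print(noisy.shape, round(snr, 2))
```

Drawing barycentric weights from a symmetric Dirichlet(1, ..., 1) distribution is what makes the clean samples uniform over the simplex.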
A cleft lip is a congenital abnormality requiring expert surgical repair. The surgeon must possess rich experience and theoretical knowledge to perform the surgery, and Artificial Intelligence (AI) methods have been proposed to guide surgeons in improving surgical outcomes. If AI can be used to predict the appearance of a repaired cleft lip, surgeons could use it as an adjunct to adjust their surgical technique and improve results. To explore the feasibility of this idea while protecting patient privacy, we propose a deep learning-based image inpainting method that is capable of covering a cleft lip and producing a lip without a cleft. Our experiments are conducted on two real-world cleft lip datasets and are assessed by expert cleft lip surgeons to demonstrate the feasibility of the proposed method.
For the past 25 years, we have witnessed extensive application of machine learning to the compiler space, e.g., the selection and phase-ordering problems. However, limited works have been upstreamed into state-of-the-art compilers, i.e., LLVM, to seamlessly integrate the former into a compiler's optimization pipeline so that it can be readily deployed by users. MLGO was among the first of such projects, and it only strives to reduce the code size of a binary with an ML-based inliner using reinforcement learning. This paper presents MLGOPerf, the first end-to-end framework capable of optimizing performance using LLVM's ML inliner. It employs a secondary ML model to generate rewards for training a retargeted reinforcement-learning agent, which was previously used by MLGO as its primary model. It does so by predicting the post-inlining speedup of a function under analysis, and it enables a fast training framework for the primary model which would otherwise be impractical. Experimental results show that MLGOPerf achieves speedups of 1.8% and 2.2% with respect to LLVM's optimization at O3 for the SPEC CPU2006 and Cbench benchmarks, respectively. Furthermore, the proposed approach yields 26% more autotuned code regions for our benchmarks, which can be translated into an additional 3.7% speedup.
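The reward wiring described above can be sketched as follows. The speedup "model" here is a hypothetical stand-in lookup table, not MLGOPerf's trained network, and the function names are invented for illustration:

```python
import math

# Illustrative sketch: a secondary model predicts the speedup of a function
# after a candidate inlining decision, and that prediction becomes the reward
# for the reinforcement-learning inliner. A log-speedup reward is one natural
# (assumed) choice: positive when inlining helps, negative when it hurts.

def predicted_speedup(function_name, inline_decision):
    """Stand-in for the secondary ML model's speedup prediction."""
    table = {("hot_loop", True): 1.15, ("hot_loop", False): 1.00,
             ("cold_init", True): 0.97, ("cold_init", False): 1.00}
    return table[(function_name, inline_decision)]

def reward(function_name, inline_decision):
    """Log-speedup reward derived from the predicted speedup."""
    return math.log(predicted_speedup(function_name, inline_decision))

print(reward("hot_loop", True) > 0)   # inlining the hot function is rewarded
print(reward("cold_init", True) < 0)  # inlining the cold one is penalized
```

The point of such a learned reward is speed: querying a model is far cheaper than compiling and benchmarking each candidate decision, which is what makes training the primary agent practical.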
Soil erosion is a significant threat to the environment and long-term land management around the world. Accelerated by human activities, soil erosion causes extreme changes in terrestrial and aquatic ecosystems, which have not been fully investigated or predicted for the present and probable futures at the field scale (30-m). Here, we estimate and predict soil erosion rates by water erosion (sheet and rill erosion) using three alternative Shared Socioeconomic Pathway and Representative Concentration Pathway (SSP-RCP) scenarios (2.6, 4.5, and 8.5). The field-scale soil erosion model (FSSLM) estimation relies on a high-resolution (30-m) G2 erosion model integrated with satellite- and image-based estimates of land use and land cover (LULC), gauge observations of long-term precipitation, and scenarios from the Coupled Model Intercomparison Project Phase 6 (CMIP6). The baseline model (2020) estimates a soil erosion rate of 2.32 Mg ha^-1 yr^-1 with current agricultural conservation practices (CPs). Future scenarios with current CPs indicate increases between 8% and 21% under different combinations of SSP-RCP scenarios of climate and LULC changes. The soil erosion forecast for 2050 suggests that all climate and LULC scenarios indicate either an increase in extreme events or a large shift in extreme spatial locations, largely from the southern to the eastern and northeastern regions of the United States.
In this paper, we describe an approach for representation learning of audio signals for the task of COVID-19 detection. The raw audio samples are processed with a bank of 1-D convolutional filters that are parameterized as cosine-modulated Gaussian functions. The choice of these kernels allows the interpretation of the filterbank as smooth band-pass filters. The filtered outputs are pooled, log-compressed, and used in a self-attention based relevance weighting mechanism. The relevance weights emphasize the key regions of the time-frequency decomposition that are important for the downstream task. The subsequent layers of the model consist of a recurrent architecture, and the model is trained to perform the COVID-19 detection task. In our experiments on the Coswara dataset, we show that the proposed model achieves significant performance improvements over the baseline system as well as other representation learning approaches. Furthermore, the proposed approach is shown to be applicable to both speech and breathing signals, and to transfer learning from a larger dataset.
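A minimal sketch of such a kernel, under an assumed parameterization rather than the paper's exact code, shows why a cosine-modulated Gaussian acts as a smooth band-pass filter: the Gaussian window sets the bandwidth and the cosine sets the center frequency.

```python
import numpy as np

# Illustrative cosine-modulated Gaussian kernel:
#   g(t) = cos(2*pi*f*t) * exp(-t^2 / (2*sigma^2))
# Its Fourier transform is a Gaussian bump centered at f, i.e. a smooth
# band-pass response. Parameter choices here are assumptions.

def cosine_modulated_gaussian(length, center_freq, bandwidth, sample_rate):
    t = (np.arange(length) - length // 2) / sample_rate
    sigma = 1.0 / (2 * np.pi * bandwidth)  # narrower band -> wider window
    window = np.exp(-t**2 / (2 * sigma**2))
    return np.cos(2 * np.pi * center_freq * t) * window

kernel = cosine_modulated_gaussian(length=401, center_freq=1000.0,
                                   bandwidth=100.0, sample_rate=16000)

# The kernel responds strongly at its center frequency and weakly far away.
t = np.arange(401) / 16000
in_band = np.abs(np.dot(kernel, np.cos(2 * np.pi * 1000.0 * t)))
out_band = np.abs(np.dot(kernel, np.cos(2 * np.pi * 4000.0 * t)))
print(in_band > 10 * out_band)  # → True
```

In the learnable filterbank setting, the center frequency and bandwidth of each kernel become trainable parameters instead of fixed constants.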
This report describes the system used for detecting COVID-19 positives in the Second DiCOVA Challenge using three different acoustic modalities, namely speech, breathing, and cough. The proposed system is based on a combination of 4 different approaches, each focusing on one aspect of the problem, and achieves blind test AUCs of 86.41, 77.60, and 84.55 on the breathing, cough, and speech tracks, respectively, and an AUC of 85.37 for the fusion of the three tracks.
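The fusion step can be sketched with toy per-modality scores (invented values, not the challenge data); the rank-based AUC below is the standard Mann-Whitney estimate:

```python
import numpy as np

# Illustrative sketch of late fusion: per-modality probability scores are
# averaged, and AUC is computed from the ranks of the fused scores.

def auc(labels, scores):
    """Rank-based AUC: probability that a positive outranks a negative."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

labels    = np.array([1, 1, 1, 0, 0, 0])
breathing = np.array([0.9, 0.4, 0.7, 0.5, 0.2, 0.3])
cough     = np.array([0.6, 0.8, 0.5, 0.4, 0.6, 0.1])
speech    = np.array([0.7, 0.6, 0.9, 0.3, 0.4, 0.5])

fused = (breathing + cough + speech) / 3  # simple unweighted average
print(auc(labels, fused))
```

In practice the fusion weights would be tuned on validation data rather than fixed to an unweighted average.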